Frontal cortex selects representations of the talker’s mouth to aid in speech perception
Authors
Abstract
Human faces contain multiple sources of information. During speech perception, visual information from the talker's mouth is integrated with auditory information from the talker's voice. By directly recording neural responses from small populations of neurons in patients implanted with subdural electrodes, we found enhanced visual cortex responses to speech when auditory speech was absent (rendering visual speech especially relevant). Receptive field mapping demonstrated that this enhancement was specific to regions of the visual cortex with retinotopic representations of the mouth of the talker. Connectivity between frontal cortex and other brain regions was measured with trial-by-trial power correlations. Strong connectivity was observed between frontal cortex and mouth regions of visual cortex; connectivity was weaker between frontal cortex and non-mouth regions of visual cortex or auditory cortex. These results suggest that top-down selection of visual information from the talker's mouth by frontal cortex plays an important role in audiovisual speech perception.
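The connectivity measure described above can be illustrated with a minimal sketch: compute the power of each electrode's band-passed signal on every trial, then correlate those per-trial power values across trials between two recording sites. This is an assumed, simplified version of such an analysis, not the authors' actual pipeline; the function name and the toy data are invented for illustration.

```python
import numpy as np

def trial_power_correlation(sig_a, sig_b):
    """Trial-by-trial power correlation between two electrodes.

    sig_a, sig_b: arrays of shape (n_trials, n_samples) holding
    band-pass-filtered voltage traces for the same set of trials.
    Returns the Pearson correlation of per-trial power across trials.
    """
    # Power per trial: mean squared amplitude over the trial window.
    power_a = np.mean(sig_a ** 2, axis=1)
    power_b = np.mean(sig_b ** 2, axis=1)
    # Correlate the two power time series across trials.
    return np.corrcoef(power_a, power_b)[0, 1]

# Toy example: two electrodes whose power co-fluctuates across trials
# because of a shared trial-wise gain factor.
rng = np.random.default_rng(0)
shared = rng.normal(1.0, 0.3, size=100)             # shared trial-wise gain
a = shared[:, None] * rng.normal(size=(100, 500))   # electrode A traces
b = shared[:, None] * rng.normal(size=(100, 500))   # electrode B traces
r = trial_power_correlation(a, b)
```

Because both simulated electrodes inherit the same trial-wise gain, their per-trial power values are strongly correlated, mimicking the "strong connectivity" pattern the abstract reports between frontal cortex and mouth regions of visual cortex.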
Similar resources
Effects of mouth-only and whole-face displays on audio-visual speech perception in noise: is the vision of a talker's full face truly the most efficient solution?
The goal of the present study was to establish the nature of visual input (featural vs. holistic) and the mode of its presentation that best facilitate audio-visual speech perception. Sixteen participants were asked to repeat acoustically strongly and mildly degraded syllables, presented in one auditory and three audio-visual conditions, of which one contained holistic and two contained featura...
A functional-anatomical model for lipreading.
Regional cerebral blood flow (rCBF) PET scans were used to study the physiological bases of lipreading, a natural skill of extracting language from mouth movements, which contributes to speech perception in everyday life. Viewing connected mouth movements that could not be lexically identified and that evoke perception of isolated speech sounds (nonlexical lipreading) was associated with bilate...
Contributions of local speech encoding and functional connectivity to audio-visual speech perception
Seeing a speaker's face enhances speech intelligibility in adverse environments. We investigated the underlying network mechanisms by quantifying local speech representations and directed connectivity in MEG data obtained while human participants listened to speech of varying acoustic SNR and visual context. During high acoustic SNR, speech encoding by temporally entrained brain activity was str...
Hierarchical Organization of Auditory and Motor Representations in Speech Perception: Evidence from Searchlight Similarity Analysis
How humans extract the identity of speech sounds from highly variable acoustic signals remains unclear. Here, we use searchlight representational similarity analysis (RSA) to localize and characterize neural representations of syllables at different levels of the hierarchically organized temporo-frontal pathways for speech perception. We asked participants to listen to spoken syllables that dif...
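The representational similarity analysis (RSA) mentioned in this abstract compares pairwise dissimilarities between neural activity patterns against a model of how the stimuli should relate. The sketch below is a generic, assumed illustration of the core RSA computation, not the searchlight pipeline from the cited study; all names and the toy data are invented.

```python
import numpy as np
from itertools import combinations

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson r between
    each pair of condition patterns (rows of `patterns`)."""
    n = patterns.shape[0]
    d = np.zeros((n, n))
    for i, j in combinations(range(n), 2):
        r = np.corrcoef(patterns[i], patterns[j])[0, 1]
        d[i, j] = d[j, i] = 1.0 - r
    return d

def rsa_score(neural_rdm, model_rdm):
    """Rank correlation between the upper triangles of two RDMs
    (a simple Spearman variant without tie correction)."""
    iu = np.triu_indices_from(neural_rdm, k=1)
    a, b = neural_rdm[iu], model_rdm[iu]
    ra = np.argsort(np.argsort(a)).astype(float)  # ranks of neural dissimilarities
    rb = np.argsort(np.argsort(b)).astype(float)  # ranks of model dissimilarities
    return np.corrcoef(ra, rb)[0, 1]

# Toy example: four syllable patterns from two phonetic categories,
# plus a model RDM predicting within-category similarity.
rng = np.random.default_rng(1)
proto = rng.normal(size=(2, 50))                  # two category prototypes
patterns = np.vstack([proto[c] + 0.2 * rng.normal(size=50) for c in (0, 0, 1, 1)])
model = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [1, 1, 0, 0],
                  [1, 1, 0, 0]], dtype=float)
score = rsa_score(rdm(patterns), model)
```

A high score indicates that the neural dissimilarity structure matches the model's predicted category structure, which is the logic a searchlight RSA applies repeatedly at each location in the brain.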
Single word reading in developmental stutterers and fluent speakers.
Ten fluent speakers and nine developmental stutterers read isolated nouns aloud in a delayed reading paradigm. Cortical activation sequences were mapped with a whole-head magnetoencephalography system. The stutterers were mostly fluent in this task. Although the overt performance was essentially identical in the two groups, the cortical activation patterns showed clear differences, both in the ...
Journal title:
Volume 7, Issue
Pages -
Publication date: 2018